9 research outputs found

    Importance sampling for stochastic programming

    Get PDF
    Stochastic programming models are large-scale optimization problems used to facilitate decision-making under uncertainty. Optimization algorithms for such problems need to evaluate the expected future costs of current decisions, often referred to as the recourse function. In practice, this calculation is computationally difficult, as it requires the evaluation of a multidimensional integral whose integrand is an optimization problem. Instead, the recourse function has to be estimated using techniques such as scenario trees or Monte Carlo methods, both of which require numerous function evaluations to produce accurate results for large-scale problems with multiple periods and high-dimensional uncertainty. In this thesis, we introduce an importance sampling framework for stochastic programming that can produce accurate estimates of the recourse function using a small number of samples. Previous approaches to importance sampling in stochastic programming were limited to problems where the uncertainty was modelled using discrete random variables and the recourse function was additively separable in the uncertain dimensions. Our framework avoids these restrictions by pairing Markov chain Monte Carlo methods with kernel density estimation algorithms to build a non-parametric importance sampling distribution, which can then be used to produce a low-variance estimate of the recourse function. We demonstrate the increased accuracy and efficiency of our approach using variants of well-known multistage stochastic programming problems. Our numerical results show that our framework produces more accurate estimates of the optimal value of stochastic programming models, especially for problems with moderate-to-high-variance distributions or rare-event distributions.
For example, in some applications we found that when the random variables are drawn from a rare-event distribution, our proposed algorithm achieves a fourfold reduction in mean squared error and variance relative to existing methods (e.g., SDDP with crude Monte Carlo or SDDP with quasi-Monte Carlo sampling) for the same number of samples. When the random variables are drawn from a high-variance distribution, our algorithm reduces the variance by a factor of two on average compared with other methods, at approximately the same mean squared error and a fixed number of samples. We use the proposed algorithm to solve a capacity expansion planning problem in the electric power industry. The model includes the unit commitment problem and maintenance scheduling, and allows investors to make optimal decisions on the capacity and type of generators to build in order to minimize capital and operating costs over a long planning horizon. Our model computes the optimal schedule for each generator while meeting demand and respecting each generator's engineering constraints. We use an aggregation method that groups generators with similar features in order to reduce the problem size. Numerical experiments show that by clustering generators of the same technology and similar size together and applying the SDDP algorithm with our proposed sampling framework to this simplified formulation, we are able to solve the problem in only a quarter of the time required by conventional algorithms on the original problem. The speed-up is achieved without a significant reduction in solution quality.
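The MCMC-plus-KDE pipeline described above can be illustrated on a toy problem. The sketch below is an illustrative assumption, not the thesis's implementation: the "recourse function" is a simple rare-event payoff max(ξ − 2.5, 0) with ξ ~ N(0, 1), a Metropolis-Hastings chain targets the zero-variance density |Q(ξ)|p(ξ), a kernel density estimate fitted to the chain serves as the importance distribution, and a small weighted sample estimates the expectation.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)

def recourse(xi):
    # Toy stand-in for the recourse function: nonzero cost only in the tail.
    return np.maximum(xi - 2.5, 0.0)

# Nominal distribution of the uncertainty: xi ~ N(0, 1).
p = norm(loc=0.0, scale=1.0)

# Step 1: Metropolis-Hastings chain targeting the zero-variance density
# g*(xi) proportional to |Q(xi)| p(xi).
def log_target(xi):
    q = recourse(xi)
    return -np.inf if q <= 0 else np.log(q) + p.logpdf(xi)

chain, x = [], 3.0
for _ in range(5000):
    prop = x + rng.normal(scale=0.5)
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)
chain = np.array(chain[1000:])          # discard burn-in

# Step 2: fit a non-parametric (KDE) importance distribution to the chain.
kde = gaussian_kde(chain)

# Step 3: importance-sampling estimate with a small sample budget.
n = 200
xs = kde.resample(n, seed=1)[0]
weights = p.pdf(xs) / kde.pdf(xs)
is_est = np.mean(recourse(xs) * weights)

# Crude Monte Carlo with the same budget, for comparison.
cmc_est = np.mean(recourse(p.rvs(size=n, random_state=rng)))

# True value, known in closed form: phi(2.5) - 2.5*Phi(-2.5) ~ 0.0020.
print(f"IS estimate:  {is_est:.4f}")
print(f"CMC estimate: {cmc_est:.4f}")
```

With only 200 samples, the crude Monte Carlo estimate sees the rare event only a handful of times (or not at all), while the KDE-based importance sampler concentrates its budget where the integrand is nonzero.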

    Van Der Waals and Casimir Interactions of Some Graphene, Material Plate and CNTs Systems

    Get PDF
    The van der Waals and Casimir interactions between graphene and a material plate are studied using the Lifshitz theory and approximate expressions for the free energy and force. The reflection properties of electromagnetic oscillations on graphene are governed by specific boundary conditions imposed on an infinitely thin, positively charged plasma sheet carrying a continuous fluid with some mass and charge density. The obtained formulas are applied to the case of graphene interacting with an Au plate. We also calculate the Casimir interaction between a single-wall carbon nanotube and an Au plate. Comparisons with other recently obtained theoretical results are made, and generalizations to more complicated carbon nanostructures are discussed
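For reference, Lifshitz-theory calculations of this kind typically start from the standard finite-temperature free energy per unit area of two parallel bodies separated by a gap a (a textbook form, not a formula quoted from this abstract):

```latex
\mathcal{F}(a,T) = \frac{k_B T}{2\pi} \sideset{}{'}\sum_{l=0}^{\infty}
  \int_0^{\infty} k_\perp \, dk_\perp
  \sum_{\alpha=\mathrm{TM},\mathrm{TE}}
  \ln\!\left[ 1 - r_\alpha^{(1)}(i\xi_l, k_\perp)\,
                  r_\alpha^{(2)}(i\xi_l, k_\perp)\, e^{-2 a q_l} \right],
\qquad
q_l = \sqrt{k_\perp^2 + \xi_l^2/c^2},
\quad
\xi_l = \frac{2\pi k_B T\, l}{\hbar},
```

where the reflection coefficients of the two bodies (for graphene, determined by the plasma-sheet boundary conditions mentioned above) enter as r with superscripts (1) and (2), and the prime on the sum means the l = 0 Matsubara term is taken with weight 1/2.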

    pH-Dependence of the Optical Bio-sensor Based on DNA-carbon Nanotube

    Get PDF
    In 2006, Daniel A. Heller et al. [1] demonstrated that carbon nanotubes (CNTs) wrapped with DNA can be placed inside living cells and detect trace amounts of harmful contaminants using near-infrared light. This discovery could lead to new types of optical sensors and biomarkers at the subcellular level. The working principle of this optical bio-sensor made of DNA and CNTs can be explained by a simple theoretical model introduced in [3]. In this paper, the pH-dependence of DNA and the pH-dependence of the solution around the CNTs are obtained by data analysis. Substituting them into the same model yields the pH-dependence of DNA-wrapped CNTs. The range of parameters for workable conditions of this bio-sensor indicates that the solution should have a pH from 6 to 9 and that the ion concentration should be above a critical value. These results are consistent with the experimental data and with the deductions about pH and salt concentration in solution, and they are useful for applying such a bio-sensor in a living environment

    Construct and control a PV-based independent public LED street lighting system with an efficient battery management system based on the power line communication

    No full text
    International audience
    This paper presents an adaptive photovoltaic (PV)-based independent public LED street lighting system. A bidirectional LED driver powered by an 80 MHz TM4C123GH6PM microcontroller is used in this system to charge the battery as well as drive the LEDs. The system uses a hybrid charging method that combines the three-stage charging method with an improved voltage-based maximum power point tracking (MPPT). To drive the LEDs, a proportional-integral (PI) constant-voltage controller is used. Furthermore, power line communication is applied to the system to control the power flow
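A PI constant-voltage loop of the kind mentioned above can be sketched in a few lines. Everything below is an illustrative assumption (gains, setpoint, supply voltage, and the first-order converter model are not from the paper); it shows only the generic structure: a discrete PI controller with output clamping and conditional anti-windup regulating a bus voltage via a duty cycle.

```python
class PI:
    """Discrete PI controller with output clamping and anti-windup."""
    def __init__(self, kp, ki, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integ = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        u_raw = self.kp * err + self.ki * (self.integ + err * self.dt)
        # Conditional anti-windup: integrate only while unsaturated.
        if self.u_min < u_raw < self.u_max:
            self.integ += err * self.dt
        return min(max(u_raw, self.u_min), self.u_max)

# Toy first-order plant: LED bus voltage responding to the duty cycle
# (assumed converter/filter dynamics, 48 V supply, 24 V target bus).
v = 0.0
pi = PI(kp=0.5, ki=5.0, dt=0.001)
for _ in range(5000):
    duty = pi.step(24.0, v)          # regulate toward a 24 V bus
    v += (48.0 * duty - v) * 0.01    # plant update per control tick

print(f"settled voltage: {v:.2f} V")
```

The integral term removes the steady-state error that a purely proportional controller would leave, which is why PI control is the usual choice for constant-voltage LED driving.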

    Importance Sampling in Stochastic Programming: A Markov Chain Monte Carlo Approach

    No full text

    Improving the electrochemical performance of lithium‐ion battery using silica/carbon anode through prelithiation techniques

    No full text
    This work focuses on the two most common prelithiation techniques, the direct contact method (CM) and the electrochemical method (EM), applied to the SiO2/C anode in a half-cell. After the prelithiation process, the anodes were assembled in coin cells paired with an NMC622 cathode. The electrochemical measurements show that prelithiation techniques can strengthen the initial discharge capacity and Coulombic efficiency. While the non-prelithiated sample exhibits a poor discharge capacity of 48.43 mAh·g−1 and a low Coulombic efficiency of 87.41% in the first cycle, the CM and EM samples show better battery performance. Specifically, the EM4C sample exhibited a higher initial discharge capacity and Coulombic efficiency (137.06 mAh·g−1 and 95.82%, respectively) than the CM30 sample (99.08 mAh·g−1 and 93.23%, respectively). This research thus provides useful information for improving full-cell properties using SiO2/C as an anode material via prelithiation

    Predicting Future Urban Flood Risk Using Land Change and Hydraulic Modeling in a River Watershed in the Central Province of Vietnam

    No full text
    Flood risk is a significant challenge for sustainable spatial planning, particularly concerning climate change and urbanization. Framing suitable land planning strategies requires assessing future flood risk and predicting the impact of urban sprawl. This study develops an innovative approach combining land use change and hydraulic models to explore future urban flood risk, aiming to reduce it under different vulnerability and exposure scenarios. SPOT-3 and Sentinel-2 images were processed and classified to create land cover maps for 1995 and 2019, and these were used to predict the 2040 land cover using the Land Change Modeler module of TerrSet. Flood risk was computed by combining hazard, exposure, and vulnerability using hydrodynamic modeling and the Analytic Hierarchy Process method. We compared flood risk in 1995, 2019, and 2040. Although flood risk increases with urbanization, population density, and the number of hospitals in the flood plain, especially in the coastal region, the area exposed to high and very high risk decreases due to a reduction in the poverty rate. This study can provide a theoretical framework supporting climate-change-related risk assessment in other metropolitan regions. Methodologically, it underlines the importance of satellite imagery and data continuity in the planning-related decision-making process
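The Analytic Hierarchy Process step mentioned above can be sketched generically: AHP derives factor weights from the principal eigenvector of a pairwise comparison matrix, checks their consistency, and then combines normalized layers into a risk map. The pairwise judgments and the tiny 2x2 "raster" layers below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Pairwise comparison matrix for three risk factors
# (hazard, exposure, vulnerability); judgments are illustrative.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# AHP weights = principal eigenvector of A, normalized to sum to 1.
vals, vecs = np.linalg.eig(A)
k = np.argmax(np.real(vals))
w = np.real(vecs[:, k])
w = w / w.sum()

# Consistency ratio check (random index RI = 0.58 for a 3x3 matrix);
# CR < 0.1 is the usual acceptance threshold.
lam_max = np.real(vals[k])
ci = (lam_max - 3) / (3 - 1)
cr = ci / 0.58

# Combine normalized layers (toy 2x2 grids scaled to [0, 1])
# into a weighted-overlay risk map.
hazard        = np.array([[0.9, 0.4], [0.2, 0.7]])
exposure      = np.array([[0.8, 0.3], [0.1, 0.9]])
vulnerability = np.array([[0.6, 0.5], [0.3, 0.8]])

risk = w[0] * hazard + w[1] * exposure + w[2] * vulnerability
print("weights:", np.round(w, 3))
print("risk map:\n", np.round(risk, 2))
```

In a real application each layer would be a georeferenced raster (e.g., flood depth from the hydrodynamic model for hazard), but the weighting and overlay logic is the same.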